6 research outputs found

    The Consensus Game: Language Model Generation via Equilibrium Search

    When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using them to score or rank a set of candidate outputs). These procedures sometimes yield very different predictions. How do we reconcile mutually incompatible scoring procedures to obtain coherent LM predictions? We introduce a new, training-free, game-theoretic procedure for language model decoding. Our approach casts language model decoding as a regularized imperfect-information sequential signaling game - which we term the CONSENSUS GAME - in which a GENERATOR seeks to communicate an abstract correctness parameter using natural language sentences to a DISCRIMINATOR. We develop computational procedures for finding approximate equilibria of this game, resulting in a decoding algorithm we call EQUILIBRIUM-RANKING. Applied to a large number of tasks (including reading comprehension, commonsense reasoning, mathematical problem-solving, and dialog), EQUILIBRIUM-RANKING consistently, and sometimes substantially, improves performance over existing LM decoding procedures - on multiple benchmarks, we observe that applying EQUILIBRIUM-RANKING to LLaMA-7B outperforms the much larger LLaMA-65B and PaLM-540B models. These results highlight the promise of game-theoretic tools for addressing fundamental challenges of truthfulness and consistency in LMs.
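
    The core recipe lends itself to a compact sketch: given a fixed list of candidate answers, iterate regularized best responses between a generator score and a discriminator score until they roughly agree, then rank candidates by the averaged consensus. The NumPy sketch below is a simplified illustration of that idea, not the paper's exact piKL-based equilibrium computation; the function names, the anchor/temp weights, and the specific update rule are assumptions.

        import numpy as np

        def softmax(x):
            x = np.asarray(x, dtype=float)
            x = x - x.max()
            e = np.exp(x)
            return e / e.sum()

        def consensus_rerank(gen_logp, dis_logp_correct, dis_logp_incorrect,
                             iters=200, anchor=1.0, temp=0.1):
            # gen_logp[i]:           generator log P(candidate i | query)
            # dis_logp_correct[i]:   discriminator log-score for "candidate i is correct"
            # dis_logp_incorrect[i]: discriminator log-score for "candidate i is incorrect"
            gen_logp = np.asarray(gen_logp, dtype=float)
            dis_gap = (np.asarray(dis_logp_correct, dtype=float)
                       - np.asarray(dis_logp_incorrect, dtype=float))
            d = 1.0 / (1.0 + np.exp(-dis_gap))   # initial P_D(correct | candidate)
            avg_pc = np.zeros_like(gen_logp)
            avg_d = np.zeros_like(d)
            for t in range(1, iters + 1):
                # Generator: softly best-respond to the discriminator, anchored to its own prior.
                p_c = softmax(anchor * gen_logp + d / temp)          # policy when the answer should be correct
                p_w = softmax(anchor * gen_logp + (1.0 - d) / temp)  # policy when it should be incorrect
                # Discriminator: softly best-respond to the generator, anchored to its own prior.
                logit = anchor * dis_gap + np.log(p_c + 1e-12) - np.log(p_w + 1e-12)
                d = 1.0 / (1.0 + np.exp(-logit))
                # Average iterates, as is standard when approximating equilibria.
                avg_pc += (p_c - avg_pc) / t
                avg_d += (d - avg_d) / t
            # Rank candidates by joint agreement of the averaged generator and discriminator.
            return avg_pc * avg_d

        # e.g. consensus_rerank([-1.2, -0.7, -2.0], [-0.3, -1.5, -0.2], [-1.4, -0.4, -1.8])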

    Mode Regularized Generative Adversarial Networks

    Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training get stuck or push probability mass in the wrong direction, towards regions of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers help distribute probability mass fairly across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem. Comment: Published as a conference paper at ICLR 2017.
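
    One concrete instance of such a regularizer, sketched below as an illustration rather than the paper's released code, pairs the generator with an encoder and penalizes poor reconstructions of real data, so that every region of the data distribution exerts pressure on the generator and mode dropping is discouraged. G, E, and D are assumed user-defined PyTorch modules, and the lambda1/lambda2 weights are placeholders.

        import torch
        import torch.nn.functional as F

        def generator_loss_with_mode_regularizer(G, E, D, x_real, z, lambda1=0.02, lambda2=0.02):
            # Standard non-saturating adversarial term: make D label G(z) as real.
            fake_logits = D(G(z))
            adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
            # Regularizer 1: reconstructions of real samples should also fool D.
            recon = G(E(x_real))
            recon_logits = D(recon)
            adv_recon = F.binary_cross_entropy_with_logits(recon_logits, torch.ones_like(recon_logits))
            # Regularizer 2: reconstructions should stay close to the real samples.
            pixel = F.mse_loss(recon, x_real)
            return adv + lambda1 * adv_recon + lambda2 * pixel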

    AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies

    Existing approaches built separate classifiers to detect nonsense in dialogues. In this paper, we show that without external classifiers, dialogue models can detect errors in their own messages introspectively, by calculating the likelihood of replies that are indicative of poor messages. For example, if an agent believes its partner is likely to respond "I don't understand" to a candidate message, that message may not make sense, so an alternative message should be chosen. We evaluate our approach on a dataset from the game Diplomacy, which contains long dialogues richly grounded in the game state, on which existing models make many errors. We first show that hand-crafted replies can be effective for the task of detecting nonsense in applications as complex as Diplomacy. We then design AutoReply, an algorithm to search for such discriminative replies automatically, given a small number of annotated dialogue examples. We find that AutoReply-generated replies outperform handcrafted replies and perform on par with carefully fine-tuned large supervised models. Results also show that a single reply, with little computational overhead, can detect dialogue nonsense reasonably well.
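
    The scoring rule itself is simple enough to sketch. In the snippet below, model.log_prob(reply, context) is an assumed interface that returns a dialogue model's log-probability of a reply given the conversation so far, and the probe replies are hand-written stand-ins for the discriminative replies that AutoReply searches for automatically.

        import math

        def confusion_score(model, history, candidate,
                            probe_replies=("I don't understand.", "What do you mean?")):
            # Probability mass the model puts on the partner signalling confusion
            # right after the candidate message: higher means more likely nonsense.
            context = list(history) + [candidate]
            probs = [math.exp(model.log_prob(reply, context)) for reply in probe_replies]
            return sum(probs) / len(probs)

        def choose_message(model, history, candidates):
            # Send the candidate the partner is least likely to answer with confusion.
            return min(candidates, key=lambda m: confusion_score(model, history, m))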

    Learning Effective and Human-like Policies for Strategic, Multi-Agent Games

    We consider the task of building effective but human-like policies in multi-agent decision-making problems. Imitation learning (IL) is effective at predicting human actions but may not match the strength of expert humans, while reinforcement learning (RL) and search algorithms lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with. We first study the problem of producing human-like communication in latent language policies (LLPs), in which high-level instructor and low-level executor agents communicate using natural language. While LLPs can solve long-horizon RL problems, past work has found that LLP training produces agents that use messages in ways inconsistent with their natural language meanings. We introduce a sample-efficient multitask training scheme that yields human-like communication in a complex real-time strategy game. We then turn to the problem of producing human-like decision-making in a more general class of policies. We develop a regret-minimization algorithm for imperfect information games that can leverage human demonstrations. We show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human-likeness of IL while achieving much higher reward. This thesis is based on the papers Multitasking Inhibits Semantic Drift, published at NAACL 2021, and Modeling Strong and Human-Like Gameplay with KL-Regularized Search, which is currently under review for publication at ICML 2022. The contents of these papers are used with the permission of co-authors David J. Wu, Gabriele Farina, Adam Lerer, Hengyuan Hu, Anton Bakhtin, Mike Lewis, Noam Brown, and Jacob Andreas. (S.M. thesis)
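
    The KL-regularized step at the heart of the second contribution has a simple closed form: maximizing expected search value minus a KL penalty toward the imitation-learned policy just re-weights that policy by exponentiated values. The sketch below states that form only; the variable names and the single scalar lam are simplifications of the thesis's full regret-minimization procedure.

        import numpy as np

        def kl_regularized_policy(q_values, pi_il, lam=1.0):
            # argmax_pi  E_pi[Q] - lam * KL(pi || pi_il)
            #   => pi(a) proportional to pi_il(a) * exp(Q(a) / lam)
            # Small lam: play closer to the raw search values (stronger, less human-like).
            # Large lam: stay close to the human imitation policy.
            q = np.asarray(q_values, dtype=float)
            prior = np.asarray(pi_il, dtype=float)
            logits = np.log(prior + 1e-12) + q / lam
            logits -= logits.max()            # numerical stability
            p = np.exp(logits)
            return p / p.sum()

        # e.g. kl_regularized_policy([1.0, 0.2, 0.0], [0.2, 0.5, 0.3], lam=0.5)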

    Multitasking Inhibits Semantic Drift

    When intelligent agents communicate to accomplish shared goals, how do these goals shape the agents' language? We study the dynamics of learning in latent language policies (LLPs), in which instructor agents generate natural-language subgoal descriptions and executor agents map these descriptions to low-level actions. LLPs can solve challenging long-horizon reinforcement learning problems and provide a rich model for studying task-oriented language use. But previous work has found that LLP training is prone to semantic drift (use of messages in ways inconsistent with their original natural language meanings). Here, we demonstrate theoretically and empirically that multitask training is an effective counter to this problem: we prove that multitask training eliminates semantic drift in a well-studied family of signaling games, and show that multitask training of neural LLPs in a complex strategy game reduces drift while improving sample efficiency. Comment: NAACL 2021.
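
    In practice the countermeasure amounts to adding a supervised language-modeling task alongside the RL task, so the instructor keeps being trained to produce messages that mean what humans mean by them. The one-line combination below is an illustrative sketch only; the names and the single mixing weight alpha are assumptions, not the paper's exact training setup.

        def multitask_instructor_loss(rl_policy_loss, logp_human_instructions, alpha=0.5):
            # RL task: the usual latent-language-policy loss for the instructor.
            # Supervised task: maximize the likelihood of human-written instructions,
            # i.e. minimize their negative log-probability under the instructor.
            supervised_loss = -logp_human_instructions
            return alpha * rl_policy_loss + (1.0 - alpha) * supervised_loss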